Due to the limited diversity of existing datasets, the generalization ability of pose estimators is poor. To address this problem, we propose a pose augmentation solution via a DH forward kinematics model, which we call DH-AUG. We observe that previous works are all based on single-frame pose augmentation; if they are directly applied to a video pose estimator, several previously ignored problems arise: (i) angle ambiguity in bone rotation (multiple solutions); (ii) the generated skeleton videos lack motion continuity. To solve these problems, we propose a special generator based on the DH forward kinematics model, called the DH generator. Extensive experiments show that DH-AUG can greatly improve the generalization ability of video pose estimators. In addition, when applied to a single-frame 3D pose estimator, our method outperforms the previous best pose augmentation method. The source code has been released at https://github.com/hlz0606/dh-aug-dh-forward-kinematics-model-driven-augmentation-for-3d-human-pose-estimation.
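As a point of reference, here is a minimal sketch of the standard Denavit-Hartenberg forward-kinematics chain that such a generator builds on; the joint parameters and the toy chain below are illustrative placeholders, not DH-AUG's actual skeleton parameterization.

```python
import numpy as np

def dh_transform(theta, d, a, alpha):
    """Homogeneous transform between consecutive links, standard DH convention."""
    ct, st = np.cos(theta), np.sin(theta)
    ca, sa = np.cos(alpha), np.sin(alpha)
    return np.array([
        [ct, -st * ca,  st * sa, a * ct],
        [st,  ct * ca, -ct * sa, a * st],
        [0.0,      sa,       ca,      d],
        [0.0,     0.0,      0.0,    1.0],
    ])

def forward_kinematics(dh_params):
    """Chain the per-joint transforms and return each joint's 3D position."""
    T = np.eye(4)
    joints = []
    for theta, d, a, alpha in dh_params:
        T = T @ dh_transform(theta, d, a, alpha)
        joints.append(T[:3, 3].copy())
    return np.stack(joints)

# Toy 3-joint chain: varying the first joint angle sweeps the whole limb smoothly,
# which is why a forward-kinematics generator can keep motion continuity across frames.
chain = [(0.3, 0.0, 0.25, 0.0), (0.5, 0.0, 0.25, 0.0), (0.2, 0.0, 0.20, 0.0)]
print(forward_kinematics(chain))
```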
Face forgery detection methods based on convolutional neural networks achieve remarkable results during training but struggle to maintain comparable performance during testing. We observe that the detector tends to focus on content information rather than artifact traces, which indicates that the detector is sensitive to the intrinsic bias of the dataset and leads to severe overfitting. Motivated by this key observation, we design an easily embeddable disentanglement framework to remove content information, and further propose a content consistency constraint (C2C) and a global representation contrastive constraint (GRCC) to enhance the independence of the disentangled features. Furthermore, we carefully construct two unbalanced datasets to investigate the impact of content bias. Extensive visualizations and experiments demonstrate that our framework not only ignores the interference of content information but also guides the detector to mine suspicious artifact traces and achieve competitive performance.
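The exact form of C2C and GRCC is not given in this abstract; the toy snippet below only sketches one way a contrastive constraint could push disentangled artifact features away from content features. The pairing scheme (two augmented views per image, interleaved in the batch) and the loss form are assumptions made for illustration.

```python
import torch
import torch.nn.functional as F

def representation_contrastive_loss(artifact_feat, content_feat, temperature=0.1):
    """Toy contrastive constraint: artifact features of two views of the same image
    (rows 0::2 and 1::2) are pulled together, while artifact features are pushed
    away from the content features of the same image, encouraging independence
    of the two disentangled factors."""
    a = F.normalize(artifact_feat, dim=-1)           # (B, D), B even
    c = F.normalize(content_feat, dim=-1)            # (B, D)
    pos = (a[0::2] * a[1::2]).sum(-1) / temperature  # paired augmented views
    neg = (a[0::2] * c[0::2]).sum(-1) / temperature  # artifact vs. content, same image
    logits = torch.stack([pos, neg], dim=1)          # (B/2, 2)
    labels = torch.zeros(logits.size(0), dtype=torch.long)
    return F.cross_entropy(logits, labels)
```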
Multiview detection uses multiple calibrated cameras with overlapping fields of view to localize occluded pedestrians. In this field, existing methods typically adopt a "human modeling - aggregation" strategy. To find robust pedestrian representations, some intuitively use the locations of detected 2D bounding boxes, while others project entire frame features onto the ground plane. However, the former does not consider human appearance and leads to many ambiguities, while the latter suffers from projection errors due to the lack of accurate heights of human torsos and heads. In this paper, we propose a new pedestrian representation scheme based on human point cloud modeling. Specifically, using ray tracing for holistic human depth estimation, we model pedestrians as upright, thin cardboard point clouds. We then aggregate the point clouds of the pedestrian cardboards across multiple views for the final decision. Compared with existing representations, the proposed method explicitly exploits human appearance and greatly reduces projection errors through relatively accurate height estimation. The proposed method achieves very competitive results on two standard evaluation benchmarks.
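A minimal sketch of the cardboard idea, assuming a world frame with z pointing up and a known 3x4 projection matrix per camera; the sampling density, the fixed cardboard width, and the absence of orientation handling are simplifications for illustration, not the paper's exact construction.

```python
import numpy as np

def cardboard_points(foot_xy, height, width=0.5, n_h=16, n_w=5):
    """Sample an upright, thin 'cardboard' of 3D points above a pedestrian's
    ground-plane location (world frame: z is up)."""
    x0, y0 = foot_xy
    xs = np.linspace(-width / 2, width / 2, n_w) + x0
    zs = np.linspace(0.0, height, n_h)
    X, Z = np.meshgrid(xs, zs)
    Y = np.full_like(X, y0)
    return np.stack([X, Y, Z], axis=-1).reshape(-1, 3)

def project(points_3d, P):
    """Project Nx3 world points into an image with a 3x4 camera projection matrix."""
    homo = np.concatenate([points_3d, np.ones((len(points_3d), 1))], axis=1)
    uvw = homo @ P.T
    return uvw[:, :2] / uvw[:, 2:3]
```

Projecting the same cardboard into every view and pooling the image features hit by its points gives the per-pedestrian, appearance-aware representation described above.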
With the emergence of GANs, face forgery technology has been severely abused, and accurate forgery detection is urgently needed. Inspired by the fact that the PPG signal corresponds to the periodic change of skin color caused by the heartbeat in face videos, we observe that, although the PPG signal is inevitably corrupted during the forgery process, a mixture of PPG signals still remains in forged videos, and this mixture exhibits a unique rhythmic pattern depending on the generation method. Based on this key observation, we propose a framework for face forgery detection and categorization, consisting of: 1) a spatial-temporal filtering network (STFNet) for PPG signal filtering, and 2) a spatial-temporal interaction network (STINet) for constraining and modeling the interaction of PPG signals. Moreover, with insight into how forgery methods generate videos, we further exploit intra-source and inter-source information to improve the performance of the framework. Overall, extensive experiments demonstrate the superiority of our method.
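For intuition, here is a crude classical rPPG baseline: the mean green-channel trace of a face region, band-pass filtered to the plausible heart-rate band. STFNet and STINet are learned replacements for this kind of hand-crafted filtering, which the sketch does not attempt to reproduce.

```python
import numpy as np
from scipy.signal import butter, filtfilt

def naive_rppg(frames, face_box, fps=30.0, band=(0.7, 4.0)):
    """Very crude rPPG trace: mean green-channel intensity of the face region per
    frame, band-pass filtered to roughly 42-240 bpm. `frames` is a sequence of
    HxWx3 arrays; `face_box` is (x0, y0, x1, y1) in pixels."""
    x0, y0, x1, y1 = face_box
    trace = np.array([f[y0:y1, x0:x1, 1].mean() for f in frames])  # G channel
    trace = trace - trace.mean()
    b, a = butter(3, [band[0] / (fps / 2), band[1] / (fps / 2)], btype="band")
    return filtfilt(b, a, trace)
```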
Multiview detection incorporates multiple camera views to alleviate occlusion in crowded scenes, and state-of-the-art methods adopt planar transformations to project multiview features onto the ground plane. However, we find that these 2D transformations do not take the height of objects into account, and with this neglect, features along the vertical direction of the same object may not be projected onto the same ground-plane point, resulting in impure ground-plane features. To solve this problem, we propose VFA, voxelized 3D feature aggregation, for feature transformation and aggregation in multiview detection. Specifically, we voxelize the 3D space, project the voxels onto each camera view, and associate 2D features with these projected voxels. This allows us to identify and then aggregate 2D features along the same vertical line, largely alleviating projection distortions. Furthermore, since objects of different kinds (humans vs. cattle) have different shapes on the ground plane, we introduce oriented Gaussian encoding to match such shapes, improving both accuracy and efficiency. We conduct experiments on multiview 2D detection and multiview 3D detection problems. Results on four datasets (including the newly introduced MultiviewC dataset) show that our system is highly competitive with state-of-the-art methods. Code and MultiviewC are released at https://github.com/robert-mar/vfa.
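A minimal sketch of the voxel-projection step, assuming a known 3x4 projection matrix per camera and a precomputed mapping from each voxel to its vertical column; it only illustrates why pooling along a column purifies ground-plane features, not VFA's actual implementation.

```python
import numpy as np

def project_voxels(voxel_centers, P, feat_hw, down=4):
    """Project Nx3 voxel centers with a 3x4 camera matrix onto a feature map
    that is `down` times smaller than the image; return integer coordinates
    plus a validity mask (inside the map and in front of the camera)."""
    homo = np.concatenate([voxel_centers, np.ones((len(voxel_centers), 1))], axis=1)
    uvw = homo @ P.T
    uv = uvw[:, :2] / np.clip(uvw[:, 2:3], 1e-6, None) / down
    H, W = feat_hw
    u, v = uv[:, 0].round().astype(int), uv[:, 1].round().astype(int)
    valid = (u >= 0) & (u < W) & (v >= 0) & (v < H) & (uvw[:, 2] > 0)
    return u, v, valid

def aggregate_columns(feat, u, v, valid, column_idx):
    """Average the 2D features hit by every voxel of the same vertical column
    (feat: CxHxW; column_idx: per-voxel ground-cell id), so features of one
    object are pooled into one ground-plane cell instead of being smeared."""
    cells = {}
    for ui, vi, ci in zip(u[valid], v[valid], column_idx[valid]):
        cells.setdefault(ci, []).append(feat[:, vi, ui])
    return {ci: np.stack(fs).mean(axis=0) for ci, fs in cells.items()}
```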
This paper focuses on designing efficient models with low parameter counts and FLOPs for dense predictions. Even though CNN-based lightweight methods have achieved stunning results after years of research, trading off model accuracy against constrained resources still needs further improvement. This work rethinks the essential unity of the efficient Inverted Residual Block in MobileNetv2 and the effective Transformer in ViT, inductively abstracting a general concept of the Meta-Mobile Block, and we argue that the specific instantiation is very important to model performance even when the same framework is shared. Motivated by this phenomenon, we deduce a simple yet efficient modern Inverted Residual Mobile Block (iRMB) for mobile applications, which absorbs CNN-like efficiency to model short-distance dependency and Transformer-like dynamic modeling capability to learn long-distance interactions. Furthermore, we design a ResNet-like 4-phase Efficient MOdel (EMO) based only on a series of iRMBs for dense applications. Massive experiments on the ImageNet-1K, COCO2017, and ADE20K benchmarks demonstrate the superiority of our EMO over state-of-the-art methods; e.g., our EMO-1M/2M/5M achieve 71.5, 75.1, and 78.4 Top-1 accuracy, surpassing SoTA CNN-/Transformer-based models, while trading off model accuracy and efficiency well.
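A hedged PyTorch sketch of an inverted-residual block that mixes a depthwise convolution (short-distance dependency) with self-attention (long-distance interaction) inside the expanded feature space, following the description above; the real iRMB's ordering, normalization, and attention variant may differ.

```python
import torch
import torch.nn as nn

class IRMBLikeBlock(nn.Module):
    """Inverted-residual block combining a depthwise conv with self-attention.
    This is a sketch of the idea described in the abstract, not EMO's exact iRMB."""

    def __init__(self, dim, expand=4, heads=4):
        super().__init__()
        hidden = dim * expand
        self.norm = nn.LayerNorm(dim)
        self.expand = nn.Conv2d(dim, hidden, 1)
        self.dwconv = nn.Conv2d(hidden, hidden, 3, padding=1, groups=hidden)
        self.attn = nn.MultiheadAttention(hidden, heads, batch_first=True)
        self.project = nn.Conv2d(hidden, dim, 1)

    def forward(self, x):                                  # x: (B, C, H, W)
        B, C, H, W = x.shape
        h = self.norm(x.permute(0, 2, 3, 1)).permute(0, 3, 1, 2)
        h = self.expand(h)                                 # inverted expansion
        h = self.dwconv(h)                                 # local, CNN-like mixing
        tokens = h.flatten(2).transpose(1, 2)              # (B, H*W, hidden)
        tokens, _ = self.attn(tokens, tokens, tokens)      # global, dynamic mixing
        h = tokens.transpose(1, 2).reshape(B, -1, H, W)
        return x + self.project(h)                         # residual connection
```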
Decompilation aims to transform a low-level program language (LPL) (e.g., a binary file) into its functionally-equivalent high-level program language (HPL) (e.g., C/C++). It is a core technology in software security, especially in vulnerability discovery and malware analysis. In recent years, with the successful application of neural machine translation (NMT) models in natural language processing (NLP), researchers have tried to build neural decompilers by borrowing the idea of NMT. They formulate the decompilation process as a translation problem between LPL and HPL, aiming to reduce the human cost required to develop decompilation tools and improve their generalizability. However, state-of-the-art learning-based decompilers do not cope well with compiler-optimized binaries. Since real-world binaries are mostly compiler-optimized, decompilers that do not consider optimized binaries have limited practical significance. In this paper, we propose a novel learning-based approach named NeurDP that targets compiler-optimized binaries. NeurDP uses a graph neural network (GNN) model to convert LPL to an intermediate representation (IR), which bridges the gap between source code and optimized binary. We also design an Optimized Translation Unit (OTU) to split functions into smaller code fragments for better translation performance. Evaluation results on datasets containing various types of statements show that NeurDP can decompile optimized binaries with 45.21% higher accuracy than state-of-the-art neural decompilation frameworks.
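NeurDP's IR and graph construction are not specified in this abstract; the toy snippet below only illustrates the general idea of turning a low-level instruction sequence into a def-use graph of the kind a GNN could consume. The instruction format is invented purely for illustration.

```python
def build_defuse_graph(instructions):
    """Toy def-use graph: each instruction is a node; an edge goes from the
    instruction that defines a register to the later instruction that uses it.
    Instructions are (dest, op, operands) triples, an illustrative format only."""
    last_def = {}
    edges = []
    for i, (dest, _op, operands) in enumerate(instructions):
        for reg in operands:
            if reg in last_def:
                edges.append((last_def[reg], i))
        if dest is not None:
            last_def[dest] = i
    return edges

# r0 = load a; r1 = load b; r2 = r0 + r1; store r2
prog = [("r0", "load", ["a"]), ("r1", "load", ["b"]),
        ("r2", "add", ["r0", "r1"]), (None, "store", ["r2"])]
print(build_defuse_graph(prog))   # [(0, 2), (1, 2), (2, 3)]
```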
Image virtual try-on aims at replacing the cloth on a personal image with a garment image (in-shop clothes), which has attracted increasing attention from the multimedia and computer vision communities. Prior methods successfully preserve the character of clothing images; however, occlusion remains a pernicious effect for realistic virtual try-on. In this work, we first present a comprehensive analysis of the occlusions and categorize them into two aspects: i) Inherent-Occlusion: the ghost of the former cloth still exists in the try-on image; ii) Acquired-Occlusion: the target cloth warps onto an unreasonable body part. Based on the in-depth analysis, we find that the occlusions can be simulated by a novel semantically-guided mixup module, which can generate semantic-specific occluded images that work together with the try-on images to facilitate training a de-occlusion try-on (DOC-VTON) framework. Specifically, DOC-VTON first conducts a sharpened semantic parsing on the try-on person. Aided by semantics guidance and pose prior, textures of various complexities are selectively blended with human parts in a copy-and-paste manner. Then, the Generative Module (GM) is utilized to take charge of synthesizing the final try-on image and learning de-occlusion jointly. In comparison to the state-of-the-art methods, DOC-VTON achieves better perceptual quality by reducing occlusion effects.
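A toy version of the copy-and-paste blending described above, assuming float images and a binary body-part mask; the actual semantically-guided mixup module is learned and semantics/pose aware, so this only illustrates how an occluded training image can be simulated.

```python
import numpy as np

def semantic_copy_paste(tryon_img, donor_img, part_mask, alpha=1.0):
    """Paste donor texture onto the try-on image only where the chosen body-part
    mask is active, optionally soft-blended via `alpha`. Images are HxWx3 float
    arrays; `part_mask` is an HxW {0, 1} array from a semantic parser."""
    mask = part_mask[..., None].astype(np.float32)
    blend = alpha * mask
    return (1.0 - blend) * tryon_img + blend * donor_img
```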
In recent years, the Transformer architecture has shown its superiority in the video-based person re-identification task. Inspired by video representation learning, these methods mainly focus on designing modules to extract informative spatial and temporal features. However, they are still limited in extracting local attributes and global identity information, which are critical for the person re-identification task. In this paper, we propose a novel Multi-Stage Spatial-Temporal Aggregation Transformer (MSTAT) with two newly designed proxy embedding modules to address the above issue. Specifically, MSTAT consists of three stages to encode the attribute-associated, the identity-associated, and the attribute-identity-associated information from the video clips, respectively, achieving a holistic perception of the input person. We combine the outputs of all the stages for the final identification. In practice, to save computational cost, the Spatial-Temporal Aggregation (STA) modules are first adopted in each stage to conduct the self-attention operations along the spatial and temporal dimensions separately. We further introduce the Attribute-Aware and Identity-Aware Proxy embedding modules (AAP and IAP) to extract informative and discriminative feature representations at different stages. All of them are realized by employing newly designed self-attention operations with specific meanings. Moreover, temporal patch shuffling is also introduced to further improve the robustness of the model. Extensive experimental results demonstrate the effectiveness of the proposed modules in extracting informative and discriminative information from the videos, and illustrate that MSTAT can achieve state-of-the-art accuracies on various standard benchmarks.
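A sketch of factorized spatial-then-temporal self-attention over video tokens, which mirrors the idea of attending along the spatial and temporal dimensions separately; MSTAT's STA modules and proxy embeddings are more elaborate than this, so treat the block as an illustration of the factorization only.

```python
import torch
import torch.nn as nn

class FactorizedSTAttention(nn.Module):
    """Spatial then temporal self-attention over tokens shaped (B, T, N, D):
    attend across the N spatial tokens of each frame, then across the T frames
    at each spatial position, each with a residual connection."""

    def __init__(self, dim, heads=4):
        super().__init__()
        self.spatial = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.temporal = nn.MultiheadAttention(dim, heads, batch_first=True)

    def forward(self, x):                       # x: (B, T, N, D)
        B, T, N, D = x.shape
        s = x.reshape(B * T, N, D)
        s, _ = self.spatial(s, s, s)            # attention within each frame
        x = x + s.reshape(B, T, N, D)
        t = x.permute(0, 2, 1, 3).reshape(B * N, T, D)
        t, _ = self.temporal(t, t, t)           # attention across frames
        return x + t.reshape(B, N, T, D).permute(0, 2, 1, 3)
```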
Temporal sentence grounding (TSG) aims to identify the temporal boundary of a specific segment from an untrimmed video by a sentence query. All existing works first utilize a sparse sampling strategy to extract a fixed number of video frames and then conduct multi-modal interactions with the query sentence for reasoning. However, we argue that these methods have overlooked two indispensable issues: 1) Boundary-bias: The annotated target segment generally refers to two specific frames as the corresponding start and end timestamps. The video downsampling process may lose these two frames and take the adjacent irrelevant frames as new boundaries. 2) Reasoning-bias: Such incorrect new boundary frames also lead to reasoning bias during frame-query interaction, reducing the generalization ability of the model. To alleviate the above limitations, in this paper, we propose a novel Siamese Sampling and Reasoning Network (SSRN) for TSG, which introduces a siamese sampling mechanism to generate additional contextual frames to enrich and refine the new boundaries. Specifically, a reasoning strategy is developed to learn the inter-relationship among these frames and generate soft labels on boundaries for more accurate frame-query reasoning. Such a mechanism is also able to supplement the absent consecutive visual semantics to the sampled sparse frames for fine-grained activity understanding. Extensive experiments demonstrate the effectiveness of SSRN on three challenging datasets.
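One simple way to realize soft boundary labels, assuming frame indices for the annotated start and end and a Gaussian spread; SSRN's reasoning strategy learns such labels from the frames rather than fixing them analytically, so this sketch only conveys the intuition of spreading supervision to neighbouring sampled frames.

```python
import numpy as np

def soft_boundary_labels(num_frames, start_idx, end_idx, sigma=1.0):
    """Soft labels for segment boundaries: instead of one-hot start/end frames
    (which sparse sampling may miss), place a Gaussian around each boundary so
    neighbouring sampled frames still receive supervision."""
    t = np.arange(num_frames)
    start = np.exp(-((t - start_idx) ** 2) / (2 * sigma ** 2))
    end = np.exp(-((t - end_idx) ** 2) / (2 * sigma ** 2))
    return start / start.sum(), end / end.sum()
```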